Add mapping documents #7081
Conversation
Xuxuanang commented Mar 6, 2025
- Add conversion rules PaConvert#555

Thank you for contributing to the PaddlePaddle docs. The documentation preview is being built and will be available once the Docs-New job finishes. Preview link: http://preview-pr-7081.paddle-docs-preview.paddlepaddle.org.cn/documentation/docs/zh/api/index_cn.html
Resolved review threads:
- docs/guides/model_convert/convert_from_pytorch/api_difference/cuda/torch.cuda.device_of.md (outdated)
- docs/guides/model_convert/convert_from_pytorch/api_difference/cuda/torch.cuda.get_rng_state.md
- ...nvert/convert_from_pytorch/api_difference/distributed/torch.distributed.monitored_barrier.md
- ...nvert_from_pytorch/api_difference/nn/torch.nn.modules.module.register_module_forward_hook.md
- docs/guides/model_convert/convert_from_pytorch/api_difference/torch/torch.get_default_device.md
```python
# PyTorch usage
x = torch.cuda.get_rng_state(device='cuda:0')

# Paddle usage: returns a GeneratorState object
x = paddle.get_cuda_rng_state()[0]
```
If no device is passed, should it take the first element, i.e. paddle.get_cuda_rng_state()[0]?
This was revised again in the Matcher part.
...nvert/convert_from_pytorch/api_difference/distributed/torch.distributed.monitored_barrier.md (outdated)
Has the one above been changed?
docs/guides/model_convert/convert_from_pytorch/api_difference/cuda/torch.cuda.device_of.md (outdated)
```python
# PyTorch usage: returns a torch.ByteTensor
x = torch.cuda.get_rng_state(device='cuda:0')
```
This handles the transcription of the device argument; add a transcription of the return value as well. The return-value transcription does not need the device argument to be passed in.
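A minimal sketch of what the combined transcription could look like, assuming the device string maps to an index into the per-device list returned by paddle.get_cuda_rng_state(); the differing return type (GeneratorState vs. torch.ByteTensor) is only noted here, and the actual rule is whatever the Matcher implements:

```python
import paddle

# PyTorch (for reference), returns a torch.ByteTensor:
#   x = torch.cuda.get_rng_state(device='cuda:1')

# Paddle sketch: paddle.get_cuda_rng_state() returns one GeneratorState per
# visible GPU, so the device argument becomes a list index.
device_index = 1  # assumed mapping of 'cuda:1'
x = paddle.get_cuda_rng_state()[device_index]
```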
| hook | hook | The function registered as a forward pre-hook. |
| prepend | - | Controls the hook execution order; Paddle has no such parameter, no transcription available for now. |
| with_kwargs | - | Whether keyword arguments are passed to the hook; Paddle has no such parameter, no transcription available for now. |
| always_call | - | Whether the hook is forced to be called; Paddle has no such parameter, no transcription available for now. |
Does prepend control pre/post ordering? Can always_call cause the hook not to be called? Does with_kwargs change how arguments are passed?
We need to check what these three parameters actually do and whether they can make Paddle's results differ. If they have no effect on the results, they can be ignored and simply marked as "delete directly".
All three of these parameters can potentially change the results: prepend controls the calling order of the newly registered hook relative to existing hooks; always_call, when set to True, means the hook is still called even if an exception is raised; and with_kwargs affects the arguments passed to the hook.
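For reference, the per-module variant Module.register_forward_hook exposes the same three keyword arguments, so their semantics can be illustrated there; this is a standalone sketch, not code from the PR:

```python
import torch.nn as nn

def hook(module, args, kwargs, output):
    # with_kwargs=True: the hook also receives forward()'s keyword arguments
    print(type(module).__name__, "kwargs:", kwargs)
    return output

layer = nn.Linear(4, 4)

# prepend=True     -> this hook runs before hooks registered earlier on the module
# with_kwargs=True -> hook signature gains the kwargs dict shown above
# always_call=True -> the hook is still called if forward() raises an exception
layer.register_forward_hook(hook, prepend=True, with_kwargs=True, always_call=True)
```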
@@ -1040,16 +1041,8 @@
| No. | Latest PyTorch release | Paddle develop | Mapping category | Notes |
| ----- | ----------- | ----------------- | ----------- | ------- |
| IN-DEVELOPMENT-PATTERN(`torch.nn.parameter.UninitializedParameter`, https://pytorch.org/docs/stable/generated/torch.nn.parameter.UninitializedParameter.html#torch.nn.parameter.UninitializedParameter) |
| IN-DEVELOPMENT-PATTERN(`torch.nn.modules.module.register_module_forward_pre_hook`, https://pytorch.org/docs/stable/generated/torch.nn.modules.module.register_module_forward_pre_hook.html#torch-nn-modules-module-register-module-forward-pre-hook) |
| IN-DEVELOPMENT-PATTERN(`torch.nn.modules.module.register_module_forward_hook`, https://pytorch.org/docs/stable/generated/torch.nn.modules.module.register_module_forward_hook.html#torch-nn-modules-module-register-module-forward-hook) |
| IN-DEVELOPMENT-PATTERN(`torch.cuda.device_of`, https://pytorch.org/docs/stable/generated/torch.cuda.device_of.html#torch.cuda.device_of) |
This classification is wrong. Also, when moving it into the "missing functionality" category you need to be more specific about the subcategory; "missing functionality" itself has several kinds.
torch.cuda.device_of also still needs its mapping document updated to align with PaConvert.
Hasn't this already been removed and reclassified as missing functionality?
LGTM
@@ -0,0 +1,22 @@
## [ torch has more parameters ] torch.nn.modules.module.register_module_forward_hook
There are several errors:

- These two torch APIs register a hook globally for all layers, whereas Paddle's registration is per-layer; this difference must be emphasized:
  torch.nn.modules.module.register_module_forward_hook
  torch.nn.modules.module.register_module_forward_pre_hook
- I checked the documentation and did not find a prepend parameter on register_module_forward_hook; where did this parameter come from?
  https://pytorch.org/docs/main/generated/torch.nn.modules.module.register_module_forward_hook.html#torch-nn-modules-module-register-module-forward-hook
- I don't think always_call affects the computation results, so it can be classified as "delete directly".

Seen this way, these two APIs cannot be converted automatically. A mapping document can still be written, but a Matcher cannot be implemented, because Paddle has no comparable globally registered API, only per-layer registration, so there is no way to automatically turn a global registration into registering on each layer one by one (as sketched below). The PaConvert implementation also has problems: because the unit test was wrong and called Module.register_forward_hook directly, torch.nn.modules.module.register_module_forward_hook was never actually exercised, so the two faulty Matchers went untested.

Going forward, please make sure the code is correct. These APIs were only judged to have corresponding functionality in an initial screening; if there is in fact no corresponding functionality, handle them according to your own analysis and conclusions rather than forcing an incorrect mapping in.
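A hedged sketch of the global-vs-per-layer difference described above. The model here is hypothetical, and the Paddle loop only approximates the global behaviour by registering on every sub-layer explicitly, which is exactly the step a Matcher cannot generate automatically:

```python
import torch.nn as nn
import paddle

def torch_hook(module, inputs, output):
    # Registered once, fires after forward() of every nn.Module in the process.
    return output

handle = nn.modules.module.register_module_forward_hook(torch_hook)

def paddle_hook(layer, inputs, output):
    return output

# Paddle only offers per-layer registration, so a "global" hook has to be
# attached to each sub-layer by hand (hypothetical model, for illustration).
model = paddle.nn.Sequential(paddle.nn.Linear(4, 4), paddle.nn.ReLU())
for sublayer in model.sublayers(include_self=True):
    sublayer.register_forward_post_hook(paddle_hook)
```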